Rubric-based Automated Japanese Short-answer Scoring and Support System Applied to QALab-3
Authors
Abstract
We have been developing an automated Japanese short-answer scoring and support machine for the new National Center written test exams. Our approach is based on the observation that accurately recognizing textual entailment and/or synonymy has so far been almost impossible. The system generates automated scores on the basis of evaluation criteria, or rubrics, and human raters revise them. The system determines semantic similarity between the model answers and the actual written answers, as well as a certain degree of semantic identity and implication. An experimental prototype operates as a web system on a Linux computer. To evaluate the performance, we applied the method to the second round of entrance examinations given by the University of Tokyo. We compared human scores with the automated scores for a case in which 20 points were allotted across five questions of a world-history test given as part of a trial examination. The differences between the scores were within 3 points for 16 of the 20 answers provided by the NTCIR QALab-3 task office.
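The abstract describes generating automated scores from rubric model answers by measuring their similarity to written answers, with human raters revising the result. As a rough illustration only, the Python sketch below scores an answer against a hypothetical rubric using a simple character-level similarity from the standard library; the rubric data, threshold, and similarity measure are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' system): each rubric item pairs a model
# answer with a point allotment, and a written answer earns the points of
# every item whose model answer it resembles above a threshold. A human
# rater would then revise the automated score.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1]; character-based matching is a
    reasonable baseline for Japanese, which has no whitespace word boundaries."""
    return SequenceMatcher(None, a, b).ratio()


def auto_score(written_answer: str,
               rubric: list[tuple[str, int]],
               threshold: float = 0.6) -> int:
    """Sum the allotted points of rubric items whose model answers are
    sufficiently similar to the written answer (threshold is illustrative)."""
    return sum(points for model_answer, points in rubric
               if similarity(written_answer, model_answer) >= threshold)


if __name__ == "__main__":
    # Hypothetical world-history rubric: (model answer fragment, points).
    rubric = [
        ("The Silk Road linked China with the Mediterranean world", 3),
        ("Buddhism spread along these trade routes", 2),
    ]
    answer = ("The Silk Road connected China with the Mediterranean, "
              "and Buddhism spread along it.")
    print(auto_score(answer, rubric))  # automated score, to be revised by a rater
```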
Similar references
Presentation of an efficient automatic short answer grading model based on combination of pseudo relevance feedback and semantic relatedness measures
Automatic short answer grading (ASAG) is the automated process of assessing natural-language answers using computational methods and machine learning algorithms. The development of large-scale smart education systems on the one hand, and the importance of assessment as a key factor in the learning process and the challenges it faces on the other, have significantly increased the need for ...
On the Automatic Scoring of Handwritten Essays
Automating the task of scoring short handwritten student essays is considered. The goal is to assign scores which are comparable to those of human scorers by coupling two AI technologies: optical handwriting recognition and automated essay scoring. The test-bed is that of essays written by children in reading comprehension tests. The process involves several image-level operations: removal of p...
KitAi-QA: A Question Answering System for NTCIR-12 QALab-2
This paper describes a question answering system for NTCIR-12 QALab-2. The task that we participated in is the Japanese task on National Center Test and Mock exams. Our method consists of two stages: a scoring method and answer selection methods for four question types. The scoring stage detects evidence in textbooks for the next process, namely answer selection. We also focus on confli...
WUST System at NTCIR-12 QALab-2 Task
This paper describes our question answering system for the NTCIR-12 QALab-2 task, which requires solving the history questions of Japanese university entrance exams and their corresponding English translations. The English edition of Wikipedia is the main external knowledge base for our system. We first retrieve the documents and sentences related to the question from Wikipedia. Then, the classification ...